AI legislation


How Trump's Bid to Crush State AI Laws Splits His Own Party

TIME - Tech

Donald Trump, center, signs an executive order on artificial intelligence in the Oval Office on December 11. He is joined by, from left, AI advisor Sriram Krishnan, Senator Ted Cruz, Commerce Secretary Howard Lutnick, and AI and crypto czar David Sacks. Last week, President Donald Trump signaled his allegiance to the AI industry yet again by signing an executive order that aims to block states from regulating AI.


Congress May Finally Take on AI in 2025. Here's What to Expect

TIME - Tech

AI tools rapidly infiltrated people's lives in 2024, but AI lawmaking in the U.S. moved much more slowly. While dozens of AI-related bills were introduced this Congress--either to fund its research or mitigate its harms--most got stuck in partisan gridlock or buried under other priorities. In California, a bill aiming to hold AI companies liable for harms easily passed the state legislature, but was vetoed by Governor Gavin Newsom. This inaction has some AI skeptics increasingly worried. "We're seeing a replication of what we've seen in privacy and social media: of not setting up guardrails from the start to protect folks and drive real innovation," Ben Winters, the director of AI and data privacy at the Consumer Federation of America, tells TIME.


'No consensus': House backs off of push for large-scale AI regulations

FOX News

Alex Galvagni, CEO of Age of Learning and a former artificial intelligence researcher with NASA, says advances in AI now make it possible to deliver to children "a personalized and supportive" experience in education. The House of Representatives will likely not take up legislation this year to establish a large-scale framework for the artificial intelligence (AI) industry. House Majority Leader Steve Scalise, R-La., told Fox News Digital that AI development was in a stage where he was concerned that over-burdensome regulations could make the U.S. fall behind competitors like China. "There's no consensus right now," Scalise said when asked about the likelihood of AI legislation. "Frankly, we shouldn't be having some new regulatory structure, billions of taxpayer money, to do what the private sector is already doing. You know, and AI is a great example of how America's leading the world in innovation, we don't need to limit that growth by throwing a whole lot of new regulations on top of it to solve a problem that doesn't exist."


Worried about AI? How California lawmakers plan to tackle the technology's risks in 2024

Los Angeles Times

Jodi Long was caught off guard by the cage filled with cameras meant to capture images of her face and body. "I was a little freaked out because, before I walked in there, I said I don't remember this being in my contract," the actor said. The filmmakers needed her digital scan, Long was told, because they wanted to make sure her arms were positioned correctly in a scene where she holds a computer-generated character. That moment in 2020 stuck with Long, president of SAG-AFTRA's Los Angeles local, while she was negotiating for protections around the use of artificial intelligence when actors went on strike. In November, the actors guild reached a deal with Hollywood studios that -- among other things -- required consent and compensation for the use of a worker's digital replica.


The US Senate and Silicon Valley reconvene for a second AI Insight Forum

Engadget

Senator Charles Schumer (D-NY) once again played host to Silicon Valley's AI leaders on Tuesday as the US Senate reconvened its AI Insight Forum for a second time. On the guest list this go around: manifesto enthusiast Marc Andreessen and venture capitalist John Doerr, as well as Max Tegmark of the Future of Life Institute and NAACP CEO Derrick Johnson. On the agenda: "the transformational innovation that pushes the boundaries of medicine, energy, and science, and the sustainable innovation necessary to drive advancements in security, accountability, and transparency in AI," according to a release from Sen. Schumer's office. Upon exiting the meeting Tuesday, Schumer told the assembled press, "it is clear that American leadership on AI can't be done on the cheap. Almost all of the experts in today's Forum called for robust, sustained federal investment in private and public sectors to achieve our goals of American-led transformative and sustainable innovation in AI." Per National Security AI Commission estimates, paying for that could cost around $32 billion a year. However, Schumer believes that those funding challenges can be addressed by "leveraging the private sector by employing new and innovative funding mechanisms – like the Grand Challenges prize idea." "We must prioritize transformational innovation, to help create new vistas, unlock new cures, improve education, reinforce national security, protect the global food supply, and more," Schumer remarked. "But in doing so, we must act sustainably in order to minimize harms to workers, civil society and the environment." "We need to strike a balance between transformational and sustainable innovation," Schumer said. "Finding this balance will be key to our success." Senators Brian Schatz (D-HI) and John Kennedy (R-LA) also got in on the proposed regulatory action Tuesday, introducing legislation that would provide more transparency on AI-generated content by requiring clear labeling and disclosures.
Such technology could resemble the Content Credentials tag that the C2PA and CAI industry advocacy groups are developing. "Our bill is simple," Senator Schatz said in a press statement. "If any content is made by artificial intelligence, it should be labeled so that people are aware and aren't fooled or scammed." The Schatz-Kennedy AI Labeling Act, as they're calling it, would require generative AI system developers to clearly and conspicuously disclose AI-generated content to users. Those developers, and their licensees, would also have to take "reasonable steps" to prevent "systematic publication of content without disclosures." The bill would also establish a working group to create non-binding technical standards to help social media platforms automatically identify such content. "It puts the onus where it belongs: on the companies and not the consumers," Schatz said on the Senate floor Tuesday. "Labels will help people to be informed."


UK needs AI legislation to create trust so companies can 'plug AI into British economy' – report

AIHub

The British government should offer tax breaks for businesses developing AI-powered products and services, or applying AI to their existing operations, to "unlock the UK's potential for augmented productivity", according to a new University of Cambridge report. Researchers argue that the UK currently lacks the computing capacity and capital required to build "generative" machine learning models fast enough to compete with US companies such as Google, Microsoft or OpenAI. Instead, they call for a UK focus on leveraging these new AI systems for real-world applications – such as developing new diagnostic products and addressing the shortage of software engineers – which could provide a major boost to the British economy. However, the researchers caution that without new legislation to ensure the UK has solid legal and ethical AI regulation, such plans could falter: British industries and the public may struggle to trust emerging AI platforms such as ChatGPT enough to invest time and money into skilling up. The policy report is a collaboration between Cambridge's Minderoo Centre for Technology and Democracy, Bennett Institute for Public Policy, and ai@cam: the University's flagship initiative on artificial intelligence.


An inside look at Congress's first AI regulation forum

MIT Technology Review

The AI Insight Forums were announced a few months ago by Senate Majority Leader Chuck Schumer as part of his "SAFE Innovation" initiative, which is really a set of principles for AI legislation in the United States. The invite list was heavily skewed toward Big Tech execs, including CEOs of AI companies, though a few civil society and AI ethics researchers were included too. Coverage of the meeting thus far has put a particular emphasis on the reportedly unanimous agreement about the need for AI regulation, and on issues raised by Elon Musk and others about the "civilizational risks" created by AI. (This tracker from Tech Policy Press is pretty handy if you want to know more.) But to really dig below the surface, I caught up with one of the other attendees, Inioluwa Deborah Raji, who gave me an inside look at how the first meeting went, the pernicious myths she needed to debunk, and where disagreements could be felt in the room. Raji is a researcher at the University of California, Berkeley, and a fellow at Mozilla.

Exclusive: California Bill Proposes Regulating AI at State Level

TIME - Tech

A senior California lawmaker will introduce a new artificial intelligence (AI) bill to the state's senate on Wednesday, adding to national and global efforts to regulate the fast-accelerating technology. Although there are several attempts in Congress to draft AI legislation, the state of California--home to Silicon Valley, where most of the world's top AI companies are based--has a role to play in setting guardrails on the industry, according to state Senator Scott Wiener (D-San Francisco), who drafted the bill. "In an ideal world we would have a strong federal AI regulatory scheme," Wiener said in an interview with TIME on Tuesday, adding that he supports attempts in Congress and the White House to regulate the technology. "But California has a history of acting when the federal government is moving either too slowly or not acting." He added: "We need to get ahead of these risks, not do what we've done in the past around social media or other technology, where we do nothing before it's potentially too late."


Biden promises more AI laws, executive actions: 'We have a lot more work to do'

FOX News

Center for A.I. Safety Director Dan Hendrycks explains concerns about how the rapid growth of artificial intelligence could impact society. President Biden said Friday that his White House would continue to put out executive actions aimed at regulating and guiding the use of artificial intelligence but also said those actions won't end the need for Congress to pass AI legislation. "These commitments are a promising step, but we have a lot more work to do together," Biden said at the White House as he announced that seven AI development companies would work within a voluntary set of guidelines aimed at creating safe, secure and trustworthy AI systems. "Realizing the promise of AI by managing the risks is going to require new laws, regulations and oversight," Biden added. "In the weeks ahead, I'm going to continue to take executive action and help America lead the way to responsible innovation."


Sen. Hawley introduces 'guiding principles' on future AI legislation, weeks after Senate hearing

FOX News

Sam Altman, CEO of OpenAI, the artificial intelligence lab behind ChatGPT, took questions from reporters following his congressional hearing, including defining "scary AI." Sen. Josh Hawley, R-Mo., unveiled a set of "guiding principles" ahead of any future artificial intelligence legislation Wednesday, seeking to "protect Americans' privacy" as the technology continues to develop. The Republican senator outlined five principles, first reported by Axios, intended to "help set the course for the responsible development of American AI," as lawmakers figure out how to deal with current and future advancements. "Congress can and should act to protect Americans' privacy, stave off the harms of unchecked AI development, insulate kids from harmful impacts, and keep this valuable technology out of the hands of our adversaries," Hawley said in a statement. The recent leaps in easily accessible AI technology like ChatGPT have led both lawmakers and industry leaders to recognize the need for regulation.